
    Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration

    Virtual reality (VR) enables data visualization in an immersive and engaging manner and can be used to create new ways of exploring scientific data. Here, we use VR for the visualization of 3D histology data, creating a novel interface for digital pathology. Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full-resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels that represent different ranges of detail, namely the organ level and the sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology. In this interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases and included quantitative histological features relevant to tumor biology in the VR model. Owing to automated processing of the histology data, our application can easily be adapted to visualize other organs and pathologies from various origins. Our application enables a novel way of exploring high-resolution, multidimensional data for biomedical research purposes and can also be used in teaching and researcher training.
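
The two-level multi-scale behavior described above can be illustrated with a minimal sketch (the function name and distance threshold are hypothetical, not from the published application): the viewer's distance decides whether the organ-level model or the sub-organ objects of interest are shown.

```python
def select_detail_level(camera_distance, threshold=10.0):
    """Multi-scale switching: show the organ-level model from afar and the
    sub-organ objects of interest when the viewer comes close.
    The threshold value is purely illustrative."""
    return "organ" if camera_distance >= threshold else "sub-organ"

print(select_detail_level(25.0))  # organ
print(select_detail_level(2.0))   # sub-organ
```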

    Deformation equivariant cross-modality image synthesis with paired non-aligned training data

    Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods that allow training with paired but misaligned data have started to emerge. However, no robust and well-performing methods applicable to a wide range of real-world data sets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult data sets.

    Convolutional Neural Network-Based Artificial Intelligence for Classification of Protein Localization Patterns

    Identifying the localization of proteins and of their specific subpopulations associated with certain cellular compartments is crucial for understanding protein function and interactions with other macromolecules. Fluorescence microscopy is a powerful method for assessing protein localization, with an increasing demand for automated high-throughput analysis methods to complement the technical advancements in high-throughput imaging. Here, we study the applicability of deep neural network-based artificial intelligence to the classification of protein localization in 13 cellular subcompartments. We use a convolutional neural network (CNN) and a fully convolutional network (FCN) with similar architectures for the classification task, aiming at accurate classification but, importantly, also at a comparison of the networks. Our results show that both types of network perform well in protein localization classification for major cellular organelles. Yet, in this study, the FCN outperforms the CNN in the classification of images with multiple simultaneous protein localizations. We find that the FCN, whose output visualizes the identified localizations, is a very useful tool for systematic protein localization assessment.
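
The advantage of a fully convolutional output for multi-localization images can be illustrated with a toy sketch (all weights and shapes are illustrative, not the paper's architecture): a per-pixel classifier preserves spatially separate localizations, whereas pooling to a single image-level label collapses them into one class.

```python
import numpy as np

def fcn_classify(img, w):
    """FCN-style 1x1-convolution classifier: per-pixel class scores (H, W, K)."""
    return img @ w

def image_label(scores):
    """CNN-style single-label prediction: pool the score map, then argmax."""
    return int(scores.mean(axis=(0, 1)).argmax())

# Two 'marker channels': most of the image lights up channel 0, a strip channel 1.
img = np.zeros((4, 4, 2))
img[:, :3, 0] = 1.0
img[:, 3:, 1] = 1.0
w = np.eye(2)  # identity weights: channel i votes for class i

scores = fcn_classify(img, w)
per_pixel = scores.argmax(axis=-1)
print(sorted(set(per_pixel.ravel())))  # [0, 1]: both localizations visible
print(image_label(scores))            # 0: single-label view keeps only the majority
```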

    Simulation of microarray data with realistic characteristics

    BACKGROUND: Microarray technologies have become common tools in biological research. As a result, a need for effective computational methods for data analysis has emerged. Numerous algorithms have been proposed for analyzing the data. However, an objective evaluation of the proposed algorithms is not possible due to the lack of biological ground-truth information. To overcome this fundamental problem, the use of simulated microarray data for algorithm validation has been proposed. RESULTS: We present a microarray simulation model that can be used to validate different kinds of data analysis algorithms. The proposed model is unique in that it includes all the steps that affect the quality of real microarray data: simulating the biological ground-truth data, applying biological and measurement-technology-specific error models, and finally simulating microarray slide manufacturing and hybridization. Once all these steps are taken into account, the simulated data have realistic biological and statistical characteristics. The applicability of the proposed model is demonstrated by several examples. CONCLUSION: The proposed microarray simulation model is modular and can be used in different kinds of applications. It includes several previously proposed error models and can be used with different types of input data. The model can simulate both spotted two-channel and oligonucleotide-based single-channel microarrays. All this makes the model a valuable tool, for example, in the validation of data analysis algorithms.
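
The layered simulation idea, known ground truth followed by stacked error models, can be sketched as follows. All parameter values and distributions here are illustrative stand-ins, not the published model's error models.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ground_truth(n_genes, n_diff, fold=4.0):
    """Ground-truth expression ratios: most genes unchanged (ratio 1),
    a known subset differentially expressed by a fixed fold change."""
    truth = np.ones(n_genes)
    truth[:n_diff] = fold
    return truth

def apply_error_models(truth, bio_sd=0.2, meas_sd=0.1):
    """Layer on multiplicative biological variation and measurement noise
    (both on the log2 scale), mimicking stacked error models."""
    log_ratio = np.log2(truth)
    log_ratio = log_ratio + rng.normal(0, bio_sd, truth.size)   # biological
    log_ratio = log_ratio + rng.normal(0, meas_sd, truth.size)  # technical
    return log_ratio

truth = simulate_ground_truth(n_genes=1000, n_diff=50)
observed = apply_error_models(truth)
# Because the ground truth is known, a detection rule can be scored exactly.
called = observed > 1.0  # call genes with log2 ratio above 1 'differential'
tp = int(called[:50].sum())
print(tp, "of 50 true positives recalled")
```

This is precisely why simulated data enables objective algorithm validation: the set of truly differential genes is known by construction, so recall and false-positive rates can be computed exactly.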

    Unstained Tissue Imaging and Virtual Hematoxylin and Eosin Staining of Histologic Whole Slide Images

    Tissue structures, phenotypes, and pathology are routinely investigated based on histology. This includes chemically staining the transparent tissue sections to make them visible to the human eye. Although chemical staining is fast and routine, it permanently alters the tissue and often consumes hazardous reagents. Moreover, when adjacent tissue sections are used for combined measurements, cell-wise resolution is lost because the sections represent different parts of the tissue. Hence, techniques that provide visual information on the basic tissue structure while enabling additional measurements from the exact same tissue section are required. Here we tested unstained tissue imaging for the development of computational hematoxylin and eosin (HE) staining. We used unsupervised deep learning (CycleGAN) and whole slide images of prostate tissue sections to compare the performance of imaging tissue in paraffin, deparaffinized in air, and deparaffinized in mounting medium, with section thicknesses varying between 3 and 20 μm. We showed that although thicker sections increase the information content of tissue structures in the images, thinner sections generally perform better in providing information that can be reproduced in virtual staining. According to our results, tissue imaged in paraffin and as deparaffinized provides a good overall representation of the tissue for virtually HE-stained images. Further, using a pix2pix model, we showed that the reproduction of overall tissue histology can be clearly improved by image-to-image translation with supervised learning and a pixel-wise ground truth. We also showed that virtual HE staining can be applied to various tissues and used with both 20× and 40× imaging magnifications. Although the performance and methods of virtual staining need further development, our study provides evidence of the feasibility of whole slide unstained microscopy as a fast, cheap, and feasible approach to producing virtual staining of tissue histology while sparing the exact same tissue section for subsequent utilization with follow-up methods at single-cell resolution.
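
The key difference between the unsupervised (CycleGAN) and supervised (pix2pix) settings is the availability of a pixel-wise aligned ground truth, which permits a direct per-pixel reconstruction term. A minimal sketch of that supervised term, with a hypothetical linear "virtual stain" standing in for the generator:

```python
import numpy as np

def l1_pixel_loss(pred, target):
    """pix2pix-style supervised term: mean absolute error against a
    pixel-wise aligned ground-truth stained image."""
    return np.abs(pred - target).mean()

def virtual_stain(unstained, coeffs):
    """Toy 'generator': map unstained intensity to a 3-channel color.
    The coefficients are illustrative only."""
    return unstained[..., None] * coeffs  # (H, W) -> (H, W, 3)

rng = np.random.default_rng(1)
unstained = rng.random((16, 16))
target = virtual_stain(unstained, np.array([0.6, 0.2, 0.5]))

perfect = virtual_stain(unstained, np.array([0.6, 0.2, 0.5]))
off     = virtual_stain(unstained, np.array([0.5, 0.3, 0.5]))
print(l1_pixel_loss(perfect, target))  # 0.0: exact pixel-wise match
print(l1_pixel_loss(off, target) > 0)  # True: supervision penalizes any drift
```

CycleGAN has no such term, which is why aligned section pairs (when obtainable) improve the reproduction of overall tissue histology.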

    Improving Performance in Colorectal Cancer Histology Decomposition using Deep and Ensemble Machine Learning

    In routine colorectal cancer management, histologic samples stained with hematoxylin and eosin are commonly used. Nonetheless, their potential for defining objective biomarkers for patient stratification and treatment selection is still being explored. The current gold standard relies on expensive and time-consuming genetic tests. However, recent research highlights the potential of convolutional neural networks (CNNs) for extracting clinically relevant biomarkers from these readily available images. CNN-based biomarkers can predict patient outcomes comparably to the gold standards, with the added advantages of speed, automation, and minimal cost. The predictive potential of CNN-based biomarkers fundamentally relies on the ability of CNNs to accurately classify diverse tissue types from whole slide microscope images. Consequently, enhancing the accuracy of tissue class decomposition is critical to amplifying the prognostic potential of imaging-based biomarkers. This study introduces a hybrid deep and ensemble machine learning model that surpasses all preceding solutions for this classification task. Our model achieved 96.74% accuracy on the external test set and 99.89% on the internal test set. Recognizing the potential of these models in advancing the task, we have made them publicly available for further research and development.
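
One common way to combine deep models into an ensemble, shown here only as a generic sketch and not as the paper's specific architecture, is soft voting: average the per-class probabilities of several models and take the argmax.

```python
import numpy as np

def ensemble_predict(prob_stack):
    """Soft-voting ensemble: average class probabilities across models
    (axis 0), then take the argmax class per sample."""
    return prob_stack.mean(axis=0).argmax(axis=1)

# Three toy 'models' scoring 2 tiles over 3 tissue classes.
probs = np.array([
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],  # model 1
    [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]],  # model 2
    [[0.3, 0.6, 0.1], [0.2, 0.2, 0.6]],  # model 3
])
print(ensemble_predict(probs))  # [0 2]: averaging overrules individual errors
```

Averaging tends to cancel uncorrelated per-model errors, which is the basic reason ensembles can outperform any single member on tissue decomposition.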

    Generalized fixation invariant nuclei detection through domain adaptation based deep learning

    Nucleus detection is a fundamental task in histological image analysis and an important tool for many follow-up analyses. It is known that the sample preparation and scanning procedures of histological slides introduce a great amount of variability into the histological images and pose challenges for automated nucleus detection. Here, we studied the effect of histopathological sample fixation on the accuracy of a deep learning-based nuclei detection model trained with hematoxylin and eosin stained images. We experimented with training data that include three fixation methods: PAXgene, formalin, and frozen, and studied the detection accuracy of various convolutional neural networks. Our results indicate that the variability introduced during sample preparation affects the generalization of a model and should be considered when building accurate and robust nuclei detection algorithms. Our dataset includes over 67 000 annotated nuclei locations from 16 patients and three different sample fixation types. The dataset provides an excellent basis for building an accurate and robust nuclei detection model, and combined with unsupervised domain adaptation, the workflow allows generalization to images from unseen domains, including different tissues and images from different labs.
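
The intuition behind unsupervised domain adaptation can be conveyed with a much simpler statistic-matching sketch (this is not the paper's method; it is the AdaBN-style idea of aligning feature statistics across domains, with all values synthetic):

```python
import numpy as np

def align_domain(features, target_mean, target_std):
    """Unsupervised statistic matching: standardize source-domain features,
    then rescale them to the target domain's statistics."""
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    return z * target_std + target_mean

rng = np.random.default_rng(7)
src = rng.normal(5.0, 2.0, (500, 4))   # e.g. features from formalin-fixed images
tgt = rng.normal(0.0, 1.0, (500, 4))   # e.g. features from frozen sections

aligned = align_domain(src, tgt.mean(axis=0), tgt.std(axis=0))
print(np.allclose(aligned.mean(axis=0), tgt.mean(axis=0)))  # True
print(np.allclose(aligned.std(axis=0), tgt.std(axis=0)))    # True
```

Full domain adaptation methods learn such alignments inside the network rather than as a fixed post-hoc rescaling, but the goal is the same: make unseen-domain inputs look statistically like the training domain.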

    H&E Multi-Laboratory Staining Variance Exploration with Machine Learning

    In diagnostic histopathology, hematoxylin and eosin (H&E) staining is a critical process that highlights salient histological features. Staining results vary between laboratories regardless of the histopathological task, even though the method does not change. This variance can impair the accuracy of algorithms and histopathologists' time-to-insight. Investigating this variance can help calibrate stain-normalization tasks to counteract this negative potential. Using machine learning, this study evaluated the staining variance between different laboratories on three tissue types. We received H&E-stained slides from 66 different laboratories. Each slide contained kidney, skin, and colon tissue samples stained by the method routinely used in each laboratory. The samples were digitized and summarized as red, green, and blue channel histograms. Dimensionality was reduced using principal component analysis. The data projected onto the principal components were fed into the k-means clustering algorithm and the k-nearest neighbors classifier with the laboratories as the target. The k-means silhouette index indicated that K = 2 clusters had the best separability in all tissue types. The supervised classification results showed laboratory effects and tissue-type bias. Both supervised and unsupervised approaches suggested that tissue type also affects inter-laboratory variance. We suggest that tissue type also be considered when choosing the staining and color-normalization approach.
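
The pipeline described above, RGB channel histograms followed by PCA, can be sketched compactly. The synthetic "laboratories" below simply differ in stain intensity; everything about them is illustrative.

```python
import numpy as np

def channel_histograms(img, bins=16):
    """Summarize an RGB image as concatenated per-channel density histograms."""
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])

def pca_project(X, n_components=2):
    """Project feature vectors onto their top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(3)
# Two synthetic 'laboratories' with systematically different stain intensity.
lab_a = [rng.beta(2, 5, (32, 32, 3)) for _ in range(10)]  # lighter staining
lab_b = [rng.beta(5, 2, (32, 32, 3)) for _ in range(10)]  # darker staining

X = np.array([channel_histograms(im) for im in lab_a + lab_b])
proj = pca_project(X)
# The first principal component separates the labs by staining profile.
sep = proj[:10, 0].mean() - proj[10:, 0].mean()
print(proj.shape, abs(sep) > 0)
```

The reduced projection is what would then be fed into k-means (with a silhouette index to pick K) or a k-nearest-neighbors classifier with the laboratory as the target.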

    The effect of neural network architecture on virtual H&E staining: Systematic assessment of histological feasibility

    Conventional histopathology has relied on chemical staining for over a century. The staining process makes tissue sections visible to the human eye through a tedious and labor-intensive procedure that alters the tissue irreversibly, preventing repeated use of the sample. Deep learning-based virtual staining can potentially alleviate these shortcomings. Here, we used standard brightfield microscopy on unstained tissue sections and studied the impact of increased network capacity on the resulting virtually stained H&E images. Using the generative adversarial network model pix2pix as a baseline, we observed that replacing simple convolutions with dense convolution units increased the structural similarity score, peak signal-to-noise ratio, and nuclei reproduction accuracy. We also demonstrated highly accurate reproduction of histology, especially with increased network capacity, and demonstrated applicability to several tissues. We show that network architecture optimization can improve the image translation accuracy of virtual H&E staining, highlighting the potential of virtual staining in streamlining histopathological analysis.
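
One of the evaluation metrics named above, peak signal-to-noise ratio (PSNR), is straightforward to compute; a minimal sketch with synthetic images (higher PSNR means the virtually stained image is closer to its chemically stained ground truth):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a prediction and its
    pixel-wise ground truth; higher is better."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))
noisy = np.clip(target + rng.normal(0, 0.05, target.shape), 0, 1)
noisier = np.clip(target + rng.normal(0, 0.20, target.shape), 0, 1)
print(psnr(noisy, target) > psnr(noisier, target))  # True: less error, higher PSNR
```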